
    Generic Correlation Increases Noncoherent MIMO Capacity

    We study the high-SNR capacity of MIMO Rayleigh block-fading channels in the noncoherent setting where neither transmitter nor receiver has a priori channel state information. We show that when the number of receive antennas is sufficiently large and the temporal correlation within each block is "generic" (in the sense used in the interference-alignment literature), the capacity pre-log is given by T(1-1/N) for T<N, where T denotes the number of transmit antennas and N denotes the block length. A comparison with the widely used constant block-fading channel (where the fading is constant within each block) shows that for a large block length, generic correlation increases the capacity pre-log by a factor of about four.

    Comment: To be presented at IEEE Int. Symp. Inf. Theory (ISIT) 2013, Istanbul, Turkey
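
    The factor-of-four claim follows directly from the two pre-log formulas. A minimal numeric sketch (my own illustration, not from the paper): with generic correlation the pre-log T(1-1/N) is maximized over admissible T<N at T=N-1, while the constant block-fading pre-log T(1-T/N) peaks near T=N/2; for large N the ratio of the two maxima approaches four.

```python
# Illustration (not from the paper): compare the best capacity pre-logs
# of the two models in the abstract for a large block length N.

def prelog_generic(T, N):
    """Pre-log T(1 - 1/N) under generic correlation, valid for T < N."""
    assert T < N, "formula holds for T < N"
    return T * (1 - 1 / N)

def best_prelog_constant(N):
    """Best constant block-fading pre-log T(1 - T/N), maximized over T."""
    return max(T * (1 - T / N) for T in range(1, N))

N = 1000
ratio = prelog_generic(N - 1, N) / best_prelog_constant(N)
print(round(ratio, 3))  # -> 3.992, approaching 4 as N grows
```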

    Information Bottleneck on General Alphabets

    We prove rigorously a source coding theorem that can probably be considered folklore, a generalization to arbitrary alphabets of a problem motivated by the Information Bottleneck method. For general random variables $(Y, X)$, we show essentially that for some $n \in \mathbb{N}$, a function $f$ with rate limit $\log|f| \le nR$ and $I(Y^n; f(X^n)) \ge nS$ exists if and only if there is a random variable $U$ such that the Markov chain $Y - X - U$ holds, $I(U; X) \le R$ and $I(U; Y) \ge S$. The proof relies on the well-established discrete case and showcases a technique for lifting discrete coding theorems to arbitrary alphabets.

    Comment: extended version, presented at ISIT 2018, Vail, CO
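
    The single-letter characterization can be checked by brute force on a small discrete pair. A toy sketch (my own construction, not from the paper): take X uniform on four symbols and Y a noisy parity of X, sweep all deterministic quantizers U = g(X) (so the Markov chain Y - X - U holds by construction), and record, for each rate I(U;X), the best achievable relevance I(U;Y).

```python
import itertools
from math import log2

# Toy bottleneck trade-off: X uniform on {0,1,2,3}, Y = (X mod 2) flipped
# with probability 0.1; U ranges over all deterministic maps X -> {0,1}.
p_x = [0.25] * 4
p_y_given_x = [[0.9, 0.1] if x % 2 == 0 else [0.1, 0.9] for x in range(4)]

def mutual_info(p_joint):
    """I between the two coordinates of a joint pmf given as {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in p_joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in p_joint.items() if p > 0)

best = {}  # rate I(U;X) -> best relevance I(U;Y) over all quantizers g
for g in itertools.product(range(2), repeat=4):
    p_ux = {(g[x], x): p_x[x] for x in range(4)}
    p_uy = {}
    for x in range(4):
        for y in range(2):
            key = (g[x], y)
            p_uy[key] = p_uy.get(key, 0) + p_x[x] * p_y_given_x[x][y]
    R, S = mutual_info(p_ux), mutual_info(p_uy)
    best[round(R, 6)] = max(best.get(round(R, 6), 0), S)

print(best)  # a larger rate budget R permits a larger relevance S
```

    At rate R = 1 bit the best quantizer is the parity map, achieving S = 1 - H(0.1) ≈ 0.531 bits; coarser quantizers trade relevance for rate.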

    Lossy Compression of General Random Variables

    This paper is concerned with the lossy compression of general random variables, specifically with rate-distortion theory and quantization of random variables taking values in general measurable spaces such as manifolds and fractal sets. Manifold structures are prevalent in data science, e.g., in compressed sensing, machine learning, image processing, and handwritten digit recognition. Fractal sets find application in image compression and in the modeling of Ethernet traffic. Our main contributions are bounds on the rate-distortion function and the quantization error. These bounds are very general and essentially only require the existence of reference measures satisfying certain regularity conditions in terms of small ball probabilities. To illustrate the wide applicability of our results, we particularize them to random variables taking values in i) manifolds, namely, hyperspheres and Grassmannians, and ii) self-similar sets characterized by iterated function systems satisfying the weak separation property.
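
    The qualitative behavior behind such manifold bounds can be seen in a toy experiment (my own illustration, not the paper's construction): for a random vector supported on a 1-D manifold, the unit circle in R^2, the quantization error decays at the rate dictated by the manifold dimension d = 1, not the ambient dimension 2.

```python
import numpy as np

# Quantize points on the unit circle with 2**R codebook points placed at
# cell-center angles, and measure the mean squared error D as R grows.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 100_000)
x = np.stack([np.cos(theta), np.sin(theta)], axis=1)

D = {}
for R in (4, 6, 8):                        # rate in bits per source vector
    cell = 2 * np.pi / 2**R                # angular width of each cell
    phi = (np.floor(theta / cell) + 0.5) * cell   # nearest cell center
    xhat = np.stack([np.cos(phi), np.sin(phi)], axis=1)
    D[R] = np.mean(np.sum((x - xhat) ** 2, axis=1))
    print(R, D[R])

# each extra bit shrinks D by ~2^(2/d) = 4x for d = 1; a full-dimensional
# source in R^2 would only gain ~2x per bit
```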

    Oversampling Increases the Pre-Log of Noncoherent Rayleigh Fading Channels

    We analyze the capacity of a continuous-time, time-selective, Rayleigh block-fading channel in the high signal-to-noise ratio (SNR) regime. The fading process is assumed stationary within each block and to change independently from block to block; furthermore, its realizations are not known a priori to the transmitter and the receiver (noncoherent setting). A common approach to analyzing the capacity of this channel is to assume that the receiver performs matched filtering followed by sampling at symbol rate (symbol matched filtering). This yields a discrete-time channel in which each transmitted symbol corresponds to one output sample. Liang & Veeravalli (2004) showed that the capacity of this discrete-time channel grows logarithmically with the SNR, with a capacity pre-log equal to $1 - Q/N$. Here, $N$ is the number of symbols transmitted within one fading block, and $Q$ is the rank of the covariance matrix of the discrete-time channel gains within each fading block. In this paper, we show that symbol matched filtering is not a capacity-achieving strategy for the underlying continuous-time channel. Specifically, we analyze the capacity pre-log of the discrete-time channel obtained by oversampling the continuous-time channel output, i.e., by sampling it faster than at symbol rate. We prove that oversampling by a factor of two yields a capacity pre-log that is at least as large as $1 - 1/N$. Since the capacity pre-log corresponding to symbol-rate sampling is $1 - Q/N$, our result indeed implies that symbol matched filtering is not capacity achieving at high SNR.

    Comment: To appear in the IEEE Transactions on Information Theory
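
    The gap between the two pre-logs is easy to evaluate numerically. A hypothetical sketch (my own numbers, not the paper's model): take a block of N = 10 symbols whose channel gains have a rank-Q covariance matrix, built here from Q = 4 active Doppler frequencies, and compare the symbol-rate pre-log 1 - Q/N with the oversampling lower bound 1 - 1/N.

```python
import numpy as np

# Bandlimited-Doppler toy model: the in-block gain covariance is C = F F^H,
# where F collects 4 DFT columns, so C has rank Q = 4.
N = 10                        # symbols per fading block
doppler_bins = 4              # active Doppler frequencies -> rank Q
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(doppler_bins)) / N)
C = F @ F.conj().T / doppler_bins

Q = np.linalg.matrix_rank(C)
prelog_symbol_rate = 1 - Q / N    # Liang & Veeravalli pre-log: 0.6
prelog_oversampled = 1 - 1 / N    # lower bound with 2x oversampling: 0.9
print(Q, prelog_symbol_rate, prelog_oversampled)
```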

    Lossless Linear Analog Compression

    We establish the fundamental limits of lossless linear analog compression by considering the recovery of random vectors $\boldsymbol{\mathsf{x}} \in \mathbb{R}^m$ from the noiseless linear measurements $\boldsymbol{\mathsf{y}} = \boldsymbol{A}\boldsymbol{\mathsf{x}}$ with measurement matrix $\boldsymbol{A} \in \mathbb{R}^{n \times m}$. Specifically, for a random vector $\boldsymbol{\mathsf{x}} \in \mathbb{R}^m$ of arbitrary distribution we show that $\boldsymbol{\mathsf{x}}$ can be recovered with zero error probability from $n > \inf \underline{\operatorname{dim}}_\mathrm{MB}(U)$ linear measurements, where $\underline{\operatorname{dim}}_\mathrm{MB}(\cdot)$ denotes the lower modified Minkowski dimension and the infimum is over all sets $U \subseteq \mathbb{R}^m$ with $\mathbb{P}[\boldsymbol{\mathsf{x}} \in U] = 1$. This achievability statement holds for Lebesgue almost all measurement matrices $\boldsymbol{A}$. We then show that $s$-rectifiable random vectors---a stochastic generalization of $s$-sparse vectors---can be recovered with zero error probability from $n > s$ linear measurements. From classical compressed sensing theory we would expect $n \geq s$ to be necessary for successful recovery of $\boldsymbol{\mathsf{x}}$. Surprisingly, certain classes of $s$-rectifiable random vectors can be recovered from fewer than $s$ measurements. Imposing an additional regularity condition on the distribution of $s$-rectifiable random vectors $\boldsymbol{\mathsf{x}}$, we do get the expected converse result of $s$ measurements being necessary. The resulting class of random vectors appears to be new and will be referred to as $s$-analytic random vectors.
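
    The lower modified Minkowski dimension in the measurement threshold is a box-counting quantity and can be estimated numerically. A small sketch (my own illustration, using the middle-thirds Cantor set as an example support set): count the boxes of side 3^-k occupied at construction level k; the estimate converges to log 2 / log 3 ≈ 0.6309.

```python
import numpy as np

# Box-counting estimate of the Minkowski dimension of the middle-thirds
# Cantor set -- the kind of dimension governing the recovery threshold.

def cantor_left_endpoints(level):
    """Left endpoints of the 2**level intervals at the given level."""
    pts = np.array([0.0])
    for k in range(1, level + 1):
        pts = np.concatenate([pts, pts + 2.0 / 3.0**k])
    return pts

level = 8
eps = 3.0 ** -level                       # box side length
pts = cantor_left_endpoints(level)
boxes = np.unique(np.floor(pts / eps).astype(np.int64))
dim_estimate = np.log(len(boxes)) / np.log(1.0 / eps)
print(dim_estimate)  # close to log(2)/log(3) ~ 0.6309
```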

    Lossless Analog Compression

    We establish the fundamental limits of lossless analog compression by considering the recovery of arbitrary m-dimensional real random vectors x from the noiseless linear measurements y=Ax with n x m measurement matrix A. Our theory is inspired by the groundbreaking work of Wu and Verdú (2010) on almost lossless analog compression, but applies to the nonasymptotic, i.e., fixed-m case, and considers zero error probability. Specifically, our achievability result states that, for almost all A, the random vector x can be recovered with zero error probability provided that n > K(x), where K(x) is given by the infimum of the lower modified Minkowski dimension over all support sets U of x. We then particularize this achievability result to the class of s-rectifiable random vectors as introduced in Koliander et al. (2016); these are random vectors of absolutely continuous distribution---with respect to the s-dimensional Hausdorff measure---supported on countable unions of s-dimensional differentiable submanifolds of the m-dimensional real coordinate space. Countable unions of differentiable submanifolds include essentially all signal models used in the compressed sensing literature. Specifically, we prove that, for almost all A, s-rectifiable random vectors x can be recovered with zero error probability from n>s linear measurements. This threshold is, however, found not to be tight, as exemplified by the construction of an s-rectifiable random vector that can be recovered with zero error probability from n<s linear measurements. This leads us to the introduction of the new class of s-analytic random vectors, which admit a strong converse in the sense of n greater than or equal to s being necessary for recovery with probability of error smaller than one. The central conceptual tools in the development of our theory are geometric measure theory and the theory of real analytic functions.